PAC-Bayes has recently re-emerged as an effective theory with which one can derive principled learning algorithms with tight performance guarantees. However, applications of PAC-Bayes to bandit problems are relatively rare, which is unfortunate: many decision-making problems in healthcare, finance and the natural sciences can be modelled as bandit problems, and in many of these applications principled algorithms with strong performance guarantees would be very welcome. This survey provides an overview of PAC-Bayes performance bounds for bandit problems and an experimental comparison of these bounds. Our experimental comparison reveals that the available PAC-Bayes upper bounds on the cumulative regret are loose, whereas the available PAC-Bayes lower bounds on the expected reward can be surprisingly tight. We found that an offline contextual bandit algorithm that learns a policy by optimising a PAC-Bayes bound was able to learn randomised neural network policies with competitive expected reward and non-vacuous performance guarantees.
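As a concrete illustration of the bound-optimisation approach mentioned above, the following is a minimal sketch, not any surveyed paper's actual code: the toy logged data, the Gaussian policy class and the McAllester-style bound shape are all assumptions made for illustration. It trains a stochastic linear softmax policy offline by maximising an importance-weighted estimate of expected reward minus a KL complexity penalty.

```python
import math
import torch

torch.manual_seed(0)

# Toy logged contextual bandit data (all placeholder): contexts x, logged
# actions a, rewards r in [0, 1], and logging propensities p = mu(a | x).
n, d, k = 2000, 5, 3
x = torch.randn(n, d)
a = torch.randint(k, (n,))
r = torch.rand(n)
p = torch.full((n,), 1.0 / k)                   # uniform logging policy

# Gaussian posterior over the weights of a linear softmax policy,
# with a standard normal prior.
mu = torch.zeros(d, k, requires_grad=True)
rho = torch.full((d, k), -2.0, requires_grad=True)  # sigma = softplus(rho)
opt = torch.optim.Adam([mu, rho], lr=1e-2)
delta = 0.05                                    # confidence level

for step in range(500):
    sigma = torch.nn.functional.softplus(rho)
    w = mu + sigma * torch.randn_like(mu)       # reparameterised weight sample
    pi = torch.softmax(x @ w, dim=1)            # policy pi(. | x)
    iw = pi[torch.arange(n), a] / p             # importance weights
    emp_reward = (iw * r).mean()                # IPS estimate of expected reward
    kl = (0.5 * (sigma**2 + mu**2 - 1.0) - torch.log(sigma)).sum()
    # McAllester-style complexity penalty; the bounds in the survey differ.
    penalty = torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    loss = -(emp_reward - penalty)              # maximise the lower bound
    opt.zero_grad()
    loss.backward()
    opt.step()
```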
Recent NLP models can generalise `zero-shot' to new tasks using only an instruction as guidance. However, these approaches usually repeat their instructions with every input, requiring costly reprocessing of lengthy instructions for every inference example. To alleviate this, we introduce Hypernetworks for INstruction Tuning (HINT), which convert task instructions and examples, using a pretrained text encoder, into parameter-efficient modules inserted into an underlying model, eliminating the need to include instructions in the model input. Compared to prior approaches that concatenate instructions with every input instance, we find that HINT models are significantly more compute-efficient and consistently outperform these approaches for a given inference budget.
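To make the architecture concrete, here is a minimal sketch of the hypernetwork idea under stated assumptions; the class names, dimensions and adapter form below are illustrative, not the HINT authors' code. The instruction is encoded once, mapped to the weights of a small bottleneck adapter, and the adapter is then applied to the underlying model's hidden states, so the instruction never has to be re-encoded per input.

```python
import torch
import torch.nn as nn

# Minimal sketch of the hypernetwork idea; class names, dimensions and the
# adapter form are illustrative assumptions, not the HINT authors' code.
class AdapterHyperNet(nn.Module):
    def __init__(self, enc_dim=512, hidden=768, bottleneck=64):
        super().__init__()
        self.hidden, self.bottleneck = hidden, bottleneck
        # One linear map per generated adapter weight matrix.
        self.to_down = nn.Linear(enc_dim, hidden * bottleneck)
        self.to_up = nn.Linear(enc_dim, bottleneck * hidden)

    def forward(self, instruction_encoding):
        down = self.to_down(instruction_encoding).view(self.hidden, self.bottleneck)
        up = self.to_up(instruction_encoding).view(self.bottleneck, self.hidden)
        return down, up

def apply_adapter(h, down, up):
    # Residual bottleneck adapter applied to hidden states h: [seq, hidden].
    return h + torch.relu(h @ down) @ up

hyper = AdapterHyperNet()
instr_enc = torch.randn(512)          # stand-in for a pretrained text encoder's output
down, up = hyper(instr_enc)           # generated once per task, then reused
hidden_states = torch.randn(10, 768)
out = apply_adapter(hidden_states, down, up)
print(out.shape)                      # torch.Size([10, 768])
```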
Language models trained on massive prompted multitask datasets like T0 (Sanh et al., 2021) or FLAN (Wei et al., 2021a) can generalize to tasks unseen during training. We show that training on a carefully chosen subset of instances can outperform training on all available data on a variety of datasets. We assume access to a small number (250--1000) of unlabeled target task instances, select their nearest neighbors from a pool of multitask data, and use the retrieved data to train target task-specific models. Our method is more data-efficient than training a single multitask model, while still outperforming it by large margins. We evaluate across a diverse set of tasks not in the multitask pool we retrieve from, including those used to evaluate T0 and additional complex tasks including legal and scientific document QA. We retrieve small subsets of P3 (the collection of prompted datasets from which T0's training data was sampled) and finetune T5 models that outperform the 3-billion parameter variant of T0 (T0-3B) by 3--30% on 12 out of 14 evaluation datasets while using at most 2% of the data used to train T0-3B. These models also provide a better initialization than T0-3B for few-shot finetuning on target-task data, as shown by a 2--23% relative improvement over few-shot finetuned T0-3B models on 8 datasets. Our code is available at https://github.com/allenai/data-efficient-finetuning.
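The retrieval step of this recipe is easy to state in code. Below is a hedged sketch of the retrieve-then-finetune selection; the embedding dimensions, pool size and neighbour count are placeholders, and the embedding model itself is left abstract.

```python
import numpy as np

# Placeholder embeddings; in practice these would come from a text encoder
# applied to the multitask pool (e.g. P3) and the unlabeled target instances.
rng = np.random.default_rng(0)
pool = rng.standard_normal((50_000, 128)).astype(np.float32)
targets = rng.standard_normal((500, 128)).astype(np.float32)

# Normalise so that the dot product equals cosine similarity.
pool /= np.linalg.norm(pool, axis=1, keepdims=True)
targets /= np.linalg.norm(targets, axis=1, keepdims=True)

k = 100                                        # neighbours per target instance
sims = targets @ pool.T
nn_idx = np.argpartition(-sims, k, axis=1)[:, :k]
subset = np.unique(nn_idx.ravel())             # union of retrieved neighbours
print(f"retrieved {subset.size} of {pool.shape[0]} pool instances")
# `subset` indexes the rows of the multitask pool to finetune on (e.g. with T5).
```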
New technologies and the availability of geospatial data have drawn attention to spatio-temporal biases present in society. For example: the COVID-19 pandemic highlighted disparities in the availability of broadband service and its role in the digital divide; the environmental justice movement in the United States has raised awareness of the health implications for minority populations stemming from historical redlining practices; and studies have found varying quality and coverage in the collection and sharing of open-source geospatial data. Despite the extensive literature on machine learning (ML) fairness, few algorithmic strategies have been proposed to mitigate such biases. In this paper we highlight the unique challenges of quantifying and addressing spatio-temporal biases, through the lens of use cases presented in the scientific literature and media. We envision a roadmap of ML strategies that need to be developed or adapted to quantify and overcome these challenges, including transfer learning, active learning, and reinforcement learning techniques. Further, we discuss the potential role of ML in providing guidance to policy makers on issues related to spatial fairness.
Programming language processing (PLP) using machine learning has seen extensive improvements over the past few years. Increasingly more people are interested in exploring this promising field. However, it is challenging for new researchers and developers to find the right components to construct their own machine learning pipelines, given the diverse range of PLP tasks to be solved, the large number of datasets and models that have been released, and the complex set of compilers and tools involved. To improve the findability, accessibility, interoperability and reusability (FAIRness) of machine learning components, we collect and analyse a set of representative papers in the domain of machine-learning-based PLP. We then identify and characterise key concepts, including PLP tasks, model architectures and supporting tools. Finally, we show some examples of leveraging reusable components to construct machine learning pipelines that solve a set of PLP tasks.
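As a hedged example of such a reusable-component pipeline, one could pair a published code-summarisation checkpoint with the Hugging Face pipeline API; the model ID below is an assumption for illustration, not one taken from the paper.

```python
from transformers import pipeline

# Assemble a PLP pipeline (code summarisation) from published components.
# The checkpoint ID is an assumption; substitute any code-summarisation model.
summarizer = pipeline("text2text-generation",
                      model="Salesforce/codet5-base-multi-sum")

code = "def add(a, b):\n    return a + b"
print(summarizer(code)[0]["generated_text"])  # e.g. a one-line summary of the function
```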
In this paper, we analyse how the underlying 3D shape of an identity in a face image distorts its overall appearance, particularly from the perspective of deep face recognition. As is done in popular training data augmentation schemes, we render real and synthetic face images with the shape of a randomly selected or best-fitting 3D face model to produce new views of the underlying identity. We compare the deep features produced from these images to assess the perturbation these renderings introduce into the original identity. We carry out this analysis at various degrees of facial yaw, with the gender and ethnicity of the underlying identity varied. Additionally, we investigate whether adding some form of context and background pixels to these rendered images, when they are used as training data, further improves the downstream performance of a face recognition model. Our experiments demonstrate the importance of facial shape for accurate face matching, and of contextual data for network training.
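A minimal sketch of the kind of feature comparison described above, with all names and numbers hypothetical: the perturbation a rendering introduces can be scored as the cosine similarity between the deep features of the original and rendered images.

```python
import numpy as np

# Hypothetical deep features; in practice both would come from the same
# face recognition network applied to the original and rendered images.
def cosine_similarity(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
feat_original = rng.standard_normal(512)                         # real image embedding
feat_rendered = feat_original + 0.1 * rng.standard_normal(512)   # rendered-view embedding

print(f"identity preservation score: "
      f"{cosine_similarity(feat_original, feat_rendered):.3f}")
```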
Multi-robot systems face challenges in reducing human interventions, as they are often deployed in dangerous environments. It is therefore necessary to include a methodology to assess robot failure rates and reduce the need for costly human intervention. One solution is to enable the robots in a fleet to work together to ensure mission resilience. However, robotic platforms generally lack built-in interconnectivity with platforms from other vendors. This work tackles this issue by enabling such interconnectivity through a bidirectional digital twin. The twin enables the human operator to transmit information to, and receive information from, the multi-robot fleet, and it supports mission resilience through both autonomous and human-led decision making. This creates the cooperation, corroboration, and collaboration of diverse robots, leveraging the capabilities of each platform and supporting the recovery of a failed robot.
Accurate models of real quantum systems are important for investigating their behaviour, yet are difficult to distil empirically. Here, we report an algorithm, the Quantum Model Learning Agent (QMLA), which reverse engineers Hamiltonian descriptions of a target system. We test the performance of QMLA on a number of simulated experiments, demonstrating several mechanisms for the design of candidate Hamiltonian models, while entertaining numerous hypotheses about the nature of the physical interactions governing the system under study. QMLA is shown to identify the true model in the majority of instances when provided with limited prior information and control of the experimental setup. Our protocol can explore the Ising, Heisenberg and Hubbard families of models in parallel, reliably identifying the family which best describes the system dynamics. We demonstrate QMLA operating on large model spaces by incorporating a genetic algorithm to formulate new hypothetical models. The selection of models whose features propagate to the next generation is based upon an objective function inspired by the Elo rating scheme, typically used to rate competitors in games such as chess and football. In all instances, our protocol finds models that exhibit an $F_1$-score $\ge 0.88$ when compared with the true model, and it precisely identifies the true model in 72% of cases, whilst exploring a space of over 250,000 potential models. By testing which interactions actually occur in the target system, QMLA is a viable tool both for exploring fundamental physics and for the characterisation and calibration of quantum devices.
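For reference, the standard Elo update that inspires QMLA's objective function looks as follows; this is a generic sketch of the rating scheme, not QMLA's actual objective.

```python
# Generic Elo update; QMLA's actual objective function differs in detail.
def elo_update(r_a, r_b, score_a, k=32.0):
    """score_a is 1.0 if model A wins the pairwise comparison, 0.0 if it loses."""
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    r_a += k * (score_a - expected_a)
    r_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a, r_b

r1, r2 = 1200.0, 1000.0                    # ratings of two candidate models
r1, r2 = elo_update(r1, r2, score_a=0.0)   # the lower-rated model wins
print(f"{r1:.1f} {r2:.1f}")                # the upset shifts both ratings sharply
```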
A global trend towards increasing wind turbine size and distance from shore is emerging within the rapidly growing offshore wind farm market. In the UK, the offshore wind sector produced its highest amount of electricity in 2019, a 19.6% increase on the previous year. The UK is now set to increase production further, aiming to grow installed turbine capacity by 74.7%, as reflected in the recent Crown Estate leasing round. With such tremendous growth, the sector is now looking to robotics and artificial intelligence (RAI) to address lifecycle service barriers and support sustainable and profitable offshore wind energy production. Today, RAI applications are predominantly used to support short-term objectives in operations and maintenance. Moving forward, however, RAI has the potential to play a critical role across the full lifecycle of offshore wind infrastructure, from surveying, planning, design and logistics to operational support, training and decommissioning. This paper presents one of the first systematic reviews of RAI for the offshore renewable energy sector. The state of the art in RAI is analysed with respect to offshore energy requirements, from both industry and academia, in terms of current and future needs. Our review also includes a detailed assessment of the investment, regulation and skills development required to support RAI. Key trends, identified through a detailed analysis of patent and academic publication databases, provide insights into barriers such as the certification of autonomous platforms for safety compliance and reliability, the need for digital architectures to achieve scalability in autonomous fleets, adaptive mission planning for resident operations, and the optimisation of human-machine interaction for trusted partnerships between humans and autonomous assistants.
Iris recognition of living individuals is a mature biometric modality, with models deployed in government ID programmes, border control, voter registration and de-duplication, and for unlocking mobile phones. The possibility of recognising deceased subjects post-mortem, on the other hand, has emerged only recently. In this paper, we present an end-to-end deep-learning-based method for post-mortem iris segmentation and recognition, with a special visualisation technique designed to support forensic human examiners in their efforts. The proposed post-mortem iris segmentation method outperforms the state of the art and, in addition to the iris annulus detected by classical iris segmentation methods, detects abnormal regions caused by the eye decomposition process, such as furrows or irregular specular highlights on the wrinkled and dry cornea. The method was trained and validated on data acquired from 171 cadavers kept in mortuary conditions, and tested on subject-disjoint data acquired from 259 deceased subjects. To our knowledge, this is the largest data corpus used in post-mortem iris recognition research to date. The source code of the method is offered with the paper, and the test data will be made available through the National Archive of Criminal Justice Data (NACJD).